About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Modeling and Detecting Orbit Observation Errors Using Statistical Methods

Christopher Y Jang (8918840) 15 June 2020 (has links)
In the globally collaborative effort of maintaining an accurate space catalog, it is of utmost importance for ground tracking stations to provide observations that can be used to update and improve the catalog. However, each tracking station is responsible for viewing thousands of objects in a limited window of time. Limitations in sensor capabilities, human error, and other circumstances inevitably result in erroneous or unusable data, and when receiving information from a tracking station it may be difficult for the end user to determine a data set's usability. Variability in equipment, environment, and processing creates uncertainty when computing satellite positions and orbits. First, this research provides a frame of reference for how different degrees of equipment error or bias translate into different levels of orbital error after a least-squares orbit determination. Second, using only an incoming data set's angle-error distribution relative to the newly determined orbit, statistical distribution testing is used to determine the validity and usability of the newly received data set. Users are then able to communicate the orbit-position uncertainty in the data they share while assessing incoming data for potential sources of error.
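A minimal Python sketch of the second idea, distribution testing of angle residuals against a determined orbit. The residuals, the zero-mean Gaussian noise model, and the 0.05 rejection level are illustrative assumptions, not the thesis's exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Angle residuals (arcsec) of two tracking passes against the least-squares orbit.
sigma_expected = 2.0  # assumed sensor noise level (illustrative)
good_pass = rng.normal(0.0, sigma_expected, size=200)
biased_pass = rng.normal(0.0, sigma_expected, size=200) + 4.0  # simulated pointing bias

for name, resid in (("good", good_pass), ("biased", biased_pass)):
    # Kolmogorov-Smirnov test: do residuals follow the expected zero-mean Gaussian?
    stat, p = stats.kstest(resid, "norm", args=(0.0, sigma_expected))
    verdict = "usable" if p > 0.05 else "flag for review"
    print(f"{name} pass: KS p = {p:.3g} -> {verdict}")
```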
12

Analysis and improvement of risk assessment methodology for offshore energy installations : Aspects of environmental impact assessment and as-built subsea cable verification

Olsson, Andreas January 2023 (has links)
In the expansion of offshore sustainable energy systems, there is growing pressure on the environment and on permit processes, and the accumulation of assets results in a much higher total risk of accidents for future installations. Anticipating these problems at the design stage and improving verification is likely to increase energy development and reduce costs. This thesis explores offshore decision support tools (DSTs) and risk verification of subsea cable assets. For subsea cables, a statistical method is proposed that uses measurement data together with shipping traffic data (AIS) to estimate the environmental risk and the risk of accidents of installed cable assets. In contrast to today's methodology, this partially addresses the problem of improving design by using more data and surveys, and by exploiting mechanical and sensor-specific characteristics to improve confidence in burial estimation. The two studies of cable burial risk assessment techniques and verification show how the developed methodology can verify the integrity of an installed asset, although putting it into practice involves many challenges. For the marine DST and sustainable energy development, a method is proposed to model scenarios of the effective savings from developing a DST that reduces the costs offshore energy developers spend on EIA (Environmental Impact Assessment) permitting, estimating the potential savings if permit processes were shorter and less burdensome without degrading EIA quality. The study of the marine EIA DST gives a quantifiable estimate of the savings potential for permit processes in sustainable offshore development, and the results indicate a need to optimize DST development, which can be an essential factor in its implementation and success.
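A minimal Python sketch of how burial measurements and AIS traffic might be combined in the way the abstract outlines. The depth distributions, anchor penetration model, and crossing rate are all illustrative assumptions, not values or models from the thesis:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Burial depth at a cable section: as-built survey estimate with sensor uncertainty.
burial = rng.normal(1.2, 0.3, size=n)                          # m (illustrative)
anchor_penetration = rng.lognormal(np.log(0.8), 0.4, size=n)   # m (illustrative)

p_strike_given_drop = np.mean(anchor_penetration > burial)

# AIS-derived anchor-drop frequency over this section (events per year, assumed).
drops_per_year = 0.02
annual_risk = 1.0 - np.exp(-drops_per_year * p_strike_given_drop)  # Poisson model
print(f"P(anchor reaches cable | drop) = {p_strike_given_drop:.3f}")
print(f"annual probability of cable strike = {annual_risk:.5f}")
```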
13

Obtaining significance levels for the Kruskal-Wallis, Friedman and nonparametric multiple comparison tests.

Pontes, Antonio Carlos Fonseca 29 June 2000 (has links)
One of the main difficulties researchers face in using nonparametric experimental statistics is obtaining reliable results. The most widely used tests for completely randomized one-way layouts and for randomized blocks are the Kruskal-Wallis and Friedman tests, respectively. The tables available for these tests are not very comprehensive, forcing researchers to resort to approximations. These approximations differ depending on the author consulted and can lead to contradictory results. Moreover, such tables do not account for ties, even for small samples. For multiple comparisons this is even more evident, especially when ties occur or when, in completely randomized designs, the number of replications differs between treatments. The most widely used software packages (SAS, STATISTICA, S-Plus, MINITAB, etc.) generally rely on approximations to provide significance levels and do not present results for multiple comparisons. The aim of this work is therefore to present a program, in the C language, that performs the Kruskal-Wallis and Friedman tests and multiple comparisons among all treatments (two-tailed) and between treatments and a control (one- and two-tailed), considering either all systematic rank configurations or 1,000,000 random configurations, depending on the total number of possible permutations. Two significance levels are presented: DW or MaxDif, based on comparison with the maximum difference within each configuration, and the General (Geral) level, based on comparison with all differences in each configuration. The General significance levels are similar to those given by the normal approximation. Results obtained with the program also show that tests using random permutations can be good substitutes when the number of systematic permutations is very large, since the probability levels are very close.
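A minimal sketch of the random-permutation approach described above, in Python rather than the author's C program. The data, the 10,000-configuration count, and the use of scipy are illustrative assumptions:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Illustrative data: three treatments with unequal replication.
groups = [np.array([8.2, 9.1, 7.6, 8.8]),
          np.array([6.9, 7.2, 7.8]),
          np.array([9.5, 10.1, 9.9, 10.4, 9.2])]

h_obs, _ = kruskal(*groups)  # observed Kruskal-Wallis H (tie-corrected)

# Permutation significance level: shuffle group labels and recompute H.
pooled = np.concatenate(groups)
sizes = [len(g) for g in groups]
n_perm = 10_000  # random configurations; systematic enumeration may be infeasible
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    parts = np.split(perm, np.cumsum(sizes)[:-1])
    h, _ = kruskal(*parts)
    count += h >= h_obs

print(f"H = {h_obs:.3f}, permutation p-value = {count / n_perm:.4f}")
```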
14

Field-shrub roe deer (Capreolus capreolus) population status and management in the territory of Vilkaviškis district municipality

Matonytė, Aldona 15 June 2009 (has links)
This master's thesis examines the status and management of field-shrub roe deer (Capreolus capreolus) populations in the territory of Vilkaviškis district municipality. The object of the thesis is the field-shrub roe deer populations and their use in the territory of Vilkaviškis district municipality. The aim of the thesis is to analyse the status of these populations in the territory. The methods are logical analysis of the literature and analysis of statistical data. The results show that the hunting-area units of Vilkaviškis district municipality are among the most favourable for field-shrub roe deer to breed. Over the seven years studied, 1.39% of the roe deer hunted in Lithuania were taken in this territory. In some hunting-area units the roe deer population tends to decline, while in other hunting areas prudent management maintains a stable population.
16

Statistical inspection

Vyškovský, Jaroslav January 2010 (has links)
This work deals with statistical acceptance inspection for the verification of large quantities of products imported into the Czech Republic. The work itself is designed as a general handbook, so it can be applied to products other than the model one. It was elaborated with a company that imports screws for the wood industry. Common standards were used to define the quality requirements of the verified products and to design the statistical inspection plans. The goal of the work is a technical and economic evaluation of the designed method, which was applied to a model product.
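A minimal Python sketch of the kind of single sampling plan by attributes that such standards define. The plan parameters n and c and the binomial lot model are illustrative, not taken from the thesis:

```python
from math import comb

def accept_probability(p_defective: float, n: int, c: int) -> float:
    """P(accept lot) = P(at most c defectives in a sample of n), binomial model."""
    return sum(comb(n, k) * p_defective**k * (1 - p_defective)**(n - k)
               for k in range(c + 1))

# Illustrative plan: sample n=125 screws, accept the lot if at most c=3 are defective.
for p in (0.005, 0.01, 0.02, 0.05):
    print(f"lot fraction defective {p:.3f}: P(accept) = {accept_probability(p, 125, 3):.3f}")
```

Tabulating `accept_probability` over a range of defect rates traces the plan's operating characteristic (OC) curve, which is how such a plan is evaluated technically and economically.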
17

SEGMENTATION OF WHITE MATTER, GRAY MATTER, AND CSF FROM MR BRAIN IMAGES AND EXTRACTION OF VERTEBRAE FROM MR SPINAL IMAGES

PENG, ZHIGANG 02 October 2006 (has links)
No description available.
18

The use of fractal dimension for texture-based enhancement of aeromagnetic data.

Dhu, Trevor January 2008 (has links)
This thesis investigates the potential of fractal dimension (FD) as a tool for enhancing airborne magnetic data. More specifically, it investigates the potential of FD-based texture transform images as aids in the interpretation of airborne magnetic data. A series of methods for estimating FD are investigated, specifically:
• geometric methods (1D and 2D variation methods and the 1D line divider method);
• stochastic methods (1D and 2D Hurst methods and 1D and 2D semi-variogram methods); and
• spectral methods (1D and 2D wavelet methods and 1D and 2D Gabor methods).
All of these methods are able to differentiate between varying theoretical FDs in synthetic profiles, whether applied to entire profiles or in a moving window along a profile. Generally, the accuracy of the estimated FD improves and its standard deviation decreases as window size increases. This implies that moving-window FD estimation requires a trade-off between the quality of the FD estimates and the small windows needed for better spatial resolution. Application of the FD estimation methods to synthetic datasets containing simple ramps, ridges and point anomalies demonstrates that all of the 2D methods and most of the 1D methods can detect and enhance these features in the presence of up to 20% Gaussian noise. In contrast, the 1D Hurst and line divider methods cannot clearly detect these features in as little as 10% Gaussian noise; consequently, these two methods are concluded to be inappropriate for enhancing airborne magnetic data. Application of the methods to simple synthetic airborne magnetic datasets highlights their sensitivity to very small variations in the data: all of the methods responded strongly to field lines some distance from the causative magnetic bodies. This effect was eliminated through tolerances that require a minimum difference between data points before FD is calculated. While these tolerances were required for the synthetic datasets, they were not required for noise-corrupted versions of the synthetic magnetic data. The results from applying the FD estimation techniques to the synthetic airborne magnetic data suggest that these methods are more effective when applied to data from the pole: all of the methods were able to enhance the magnetic anomalies both at the pole and in the southern hemisphere, but their responses were notably simpler for the polar data. With the exception of the 1D Hurst and line divider methods, all of the methods were also able to enhance the synthetic magnetic data in the presence of 10% Gaussian noise. Application of the FD estimation methods to an airborne magnetic dataset from the Merlinleigh Sub-basin in Western Australia demonstrated their ability to enhance subtle structural features in relatively smooth airborne magnetic data; the FD-based enhancements brought out some features of this dataset better than any of the conventional enhancements considered (analytic signal, vertical and total horizontal derivatives, and automatic gain control). Most of the FD estimation techniques enhanced similar features, although the 2D methods generally produced clearer results than their 1D counterparts. In contrast, application of the FD-based enhancements to more variable airborne magnetic data from the Tanami region in the Northern Territory demonstrated that these methods are not as well suited to that style of data. The main conclusion of this work is that FD-based enhancement of relatively smooth airborne magnetic data can provide valuable input into an interpretation process, making these methods particularly useful for airborne magnetic data from regions such as sedimentary basins, where the distribution of magnetic sources is relatively smooth and simple. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1339560 / Thesis (Ph.D.) - University of Adelaide, Australian School of Petroleum, 2008
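A minimal Python sketch of one of the listed estimators, the 1D semi-variogram method, under standard assumptions (not necessarily the thesis's exact formulation): for a profile with semi-variogram gamma(h) proportional to h^(2H), the slope of log gamma against log h gives the Hurst exponent H, and FD = 2 - H. The synthetic profile and window size are illustrative:

```python
import numpy as np

def fd_semivariogram_1d(profile: np.ndarray, max_lag: int = 8) -> float:
    """Estimate fractal dimension of a 1D profile via the semi-variogram method."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((profile[h:] - profile[:-h]) ** 2) for h in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)  # slope = 2H
    return 2.0 - slope / 2.0  # FD = 2 - H for a 1D profile

# Moving-window texture transform of a synthetic profile.
rng = np.random.default_rng(1)
profile = np.cumsum(rng.standard_normal(512))  # Brownian profile, theoretical FD ~ 1.5
window = 64
fd = [fd_semivariogram_1d(profile[i:i + window]) for i in range(len(profile) - window)]
print(f"mean estimated FD: {np.mean(fd):.2f}")
```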
19

Variability and uncertainty in geotechnical engineering: from their estimation to taking them into account

Dubost, Julien 08 June 2009 (has links)
The current evolution of geotechnical engineering places the management of risks of geotechnical origin at the heart of its objectives. The complexity of development projects (through the cost/deadline/performance objectives they pursue) is increasing, and the sites chosen to receive them increasingly present difficult geotechnical conditions. These unfavourable conditions translate into strong variability of soil properties, which makes soil investigation and data analysis more complex. This thesis deals with the characterization of the natural variability of soils and of the uncertainties due to geotechnical investigations, with the aim of taking them better into account in geotechnical design. It is set in the context of managing project risks of geotechnical origin. The main statistical tools used to describe the scatter of data and their spatial structure (geostatistics), as well as the probabilistic methods enabling their results to be used in calculations, are presented from the point of view of their application in geotechnical engineering. The approach is applied to a railway platform project. This infrastructure was built on a geologically and geotechnically complex site and today presents significant deformations due to soil settlement. A new analysis of the geotechnical data was therefore undertaken. The data were first gathered in a database to ease their statistical and geostatistical treatment; their statistical and spatial variability was then characterized, allowing a better understanding of the site. The geological and geotechnical model so established was then used to compute settlements. A three-level approach is proposed (global, local and spatial), giving estimates of the settlements and their uncertainty at the scale of the site, at the boring points, and over the study zone according to the spatial structure of the soil properties. The results clearly show the value of statistical and geostatistical methods for characterizing complex sites and for building a suitable geological and geotechnical model. The proposed settlement analysis highlights that parameter uncertainties propagate into the design calculations and explain the global behaviour of the infrastructure. These results can be expressed as a probability of failure, which can then be used in a decision-making and risk-management process. More broadly, this thesis contributes to the design and analysis of geotechnical investigation campaigns, with the aim of identifying, estimating and taking into account the variability and uncertainty of the data during the various stages of a project, in order to better control risks of geotechnical origin.
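A minimal Python sketch of the kind of uncertainty propagation the abstract describes, using the classical one-dimensional primary consolidation formula s = H*Cc/(1+e0)*log10((sigma0+dsigma)/sigma0). The layer geometry, loads, and parameter distributions are illustrative, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000  # Monte Carlo samples

# Illustrative parameter distributions for a 5 m compressible layer.
Cc = rng.lognormal(mean=np.log(0.30), sigma=0.25, size=n)  # compression index
e0 = rng.normal(0.9, 0.05, size=n)                         # initial void ratio
sigma0, dsigma, H = 100.0, 50.0, 5.0                       # kPa, kPa, m

# Primary consolidation settlement, vectorized over the sampled parameters.
s = H * Cc / (1.0 + e0) * np.log10((sigma0 + dsigma) / sigma0)

limit = 0.10  # serviceability limit: 10 cm (assumed)
print(f"mean settlement = {s.mean():.3f} m, std = {s.std():.3f} m")
print(f"P(settlement > {limit} m) = {(s > limit).mean():.3f}")
```

The exceedance probability in the last line is the kind of quantity that feeds the probability-of-failure and risk-management step the abstract mentions.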
20

Study on error detection methods for digital elevation models

林永錞, Lin, Yung Chun Unknown Date (has links)
In this study, error detection methods are used to find possible elevation errors in digital elevation models (DEMs) and thereby improve DEM quality. Three detection methods are employed: a parametric statistical method, a flow direction matrix, and constrained slope and change. These methods were originally applied to grid DEMs produced photogrammetrically; this study extends them to high-resolution DEMs produced by airborne light detection and ranging (LIDAR). Simulated DEMs were used to verify the detection capability of the three methods: fitted DEMs were first obtained by fitting polynomial functions to different terrains, these fitted surfaces were assumed to be error-free, and artificial errors were then added at random. In the second part, the error detection methods were applied to real LIDAR DEM data, with elevation checks at control points used to validate the results. The results of the parametric statistical method and of constrained slope and change are similar, and both over-detect errors; this can be improved by raising the threshold or applying a high-pass filter. The flow direction matrix is less suitable for error detection, but it can be used to fill sinks and thereby optimize the terrain for watershed analysis. Keywords: digital elevation model, error detection, parametric statistical method, constrained slope and change, flow direction matrix.
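A minimal Python sketch of a slope-constraint check of the kind the abstract describes. The grid, cell size, and slope threshold are illustrative, and the thesis's exact formulation may differ:

```python
import numpy as np

def flag_slope_errors(dem: np.ndarray, cell: float, max_slope_deg: float) -> np.ndarray:
    """Flag cells whose slope to every 4-neighbour exceeds a threshold (likely spikes/pits)."""
    tan_max = np.tan(np.radians(max_slope_deg))
    padded = np.pad(dem, 1, mode="edge")
    centre = padded[1:-1, 1:-1]
    exceeds = np.ones_like(dem, dtype=bool)
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        neigh = padded[1 + di:padded.shape[0] - 1 + di, 1 + dj:padded.shape[1] - 1 + dj]
        exceeds &= np.abs(centre - neigh) / cell > tan_max
    return exceeds  # True where the cell is steeper than the limit in all directions

# Illustrative 1 m LIDAR grid with one artificial spike error.
rng = np.random.default_rng(3)
dem = rng.normal(100.0, 0.3, size=(50, 50))
dem[20, 20] += 15.0  # simulated blunder
flags = flag_slope_errors(dem, cell=1.0, max_slope_deg=60.0)
print("flagged cells:", np.argwhere(flags))
```

Requiring the threshold to be exceeded toward all four neighbours flags isolated spikes and pits while sparing genuine steep terrain, which is one simple way to limit the over-detection the abstract reports.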
