Planejamento de redes horizontais por simulações numéricas / Planning of horizontal networks through numerical simulations

Guzatto, Matheus Pereira January 2017
Although geodetic network planning has been extensively investigated, especially since the second half of the 1970s, few national (Brazilian) studies address the planning of geodetic networks, particularly through numerical simulations. Recently, Klein (2014) proposed a method for planning geodetic networks (here called the Klein Method, MK), solved by trial and error. In this context, the objective of this work is to propose improvements to the MK and to adapt it to horizontal networks through numerical simulations, something not yet found in the literature on network optimization. In the original method, each time the network fails one of the considered criteria, an increment must be chosen based on the user's expertise. In this work, an open-source program was developed to make the method independent of user decisions and thus practically viable. While the geodesist tests decisions within a limited range of options (by trial and error), the approach developed in this research exhaustively tests all possibilities of the problem by numerical simulation. To do so, the user must provide, in addition to the parameters considered in the MK, the following information: the coordinates of the control points (with their precisions and the direction(s) of the azimuth(s)); the approximate coordinates of the unknown points; which observations will be used initially; possible new observations; and, finally, the available equipment. Three strategies were implemented to minimize cost in the design stage; in increasing order of cost they are: repetition of the originally proposed observations (E1), addition of new sightings (E2), and exchange of the equipment for one of higher precision (E3). The program was tested in three experiments using data from a real network established around the Florianópolis campus of the Federal Institute of Santa Catarina, simulating the use of three different instruments. The results show that the adaptations made the MK practically viable, and the proposed objectives were successfully achieved. Among the conclusions: the final uncertainty of the network is limited by the equipment used; additional sightings should be combined so as to reduce the number of stations between the control points and the unknown vertices of the network; and the best way to substantially increase the reliability level of an observation is to repeat it. Finally, limitations of the proposed method are discussed: the difficulty of choosing ideal values for the final network uncertainty, an unfriendly visual interface, the restriction to horizontal networks, and the absence of cost as a quantitative variable in the planning stage.
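A minimal Python sketch of the exhaustive-search idea described above, written under my own assumptions (the observation encoding, the helper names and the tolerance check are illustrative and are not taken from the thesis): every candidate observation plan is evaluated by propagating the assumed measurement precisions through a least-squares adjustment, and the cheapest plan whose worst coordinate uncertainty meets the tolerance is kept.

# Sketch only: not the thesis's program. Each observation is a dict with its
# design-matrix row and its assumed standard deviation for the chosen equipment.
import itertools
import numpy as np

def coordinate_uncertainties(A, sigmas):
    """Standard deviations of the adjusted coordinates for design matrix A
    and per-observation standard deviations `sigmas`."""
    P = np.diag(1.0 / np.asarray(sigmas) ** 2)      # weight matrix
    Qx = np.linalg.inv(A.T @ P @ A)                 # cofactor matrix of the parameters
    return np.sqrt(np.diag(Qx))

def first_feasible_plan(base_plan, optional_obs, tol):
    """Try the base plan, then every combination of optional observations (cheapest
    additions first), returning the first plan whose worst coordinate meets `tol`."""
    for k in range(len(optional_obs) + 1):
        for extra in itertools.combinations(optional_obs, k):
            rows = base_plan + list(extra)
            A = np.array([r["row"] for r in rows])
            s = [r["sigma"] for r in rows]
            if coordinate_uncertainties(A, s).max() <= tol:
                return rows
    return None  # no plan meets the tolerance with this equipment (strategy E3 would follow)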

Diagnostico de influencia em modelos de volatilidade estocastica / Influence diagnostics in stochastic volatility models

Martim, Simoni Fernanda 14 August 2018
Advisors: Mauricio Enrique Zevallos Herencia, Luiz Koodi Hotta / Master's dissertation (2009) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Model diagnostics is a key step in assessing the quality of fitted models, and one of the most important diagnostic tools is influence analysis. Peña (2005) introduced a way of assessing influence in linear regression models which evaluates how each point is influenced by the others in the sample. This diagnostic strategy was adapted by Hotta and Motta (2007) to the influence analysis of univariate stochastic volatility models. In this dissertation, influence diagnostics are developed for asymmetric univariate stochastic volatility models as well as for multivariate stochastic volatility models. The proposed methodologies are illustrated through the analysis of simulated data and real financial return series. / Master's degree in Statistics (Financial Time Series)
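As a concrete illustration of the influence measure mentioned above, here is a short Python sketch of Peña's (2005) statistic in its original setting, ordinary linear regression; the normalisation below is my reading of the statistic, and the adaptation to stochastic volatility models in the dissertation is not reproduced here.

# Sketch (assumed formulas, not the thesis code): for each point i, measure how much its
# fitted value moves when every other observation is deleted in turn.
import numpy as np

def pena_influence(X, y):
    X = np.column_stack([np.ones(len(y)), X])          # add intercept
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T                # hat matrix
    e = y - H @ y                                       # residuals
    h = np.diag(H)
    s2 = e @ e / (n - p)                                # residual variance estimate
    # change in fitted value i when observation j is removed: h_ij * e_j / (1 - h_jj)
    delta = H * (e / (1.0 - h))[None, :]
    return (delta ** 2).sum(axis=1) / (p * s2 * h)      # one influence value per observation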

Détection de données aberrantes appliquée à la localisation GPS / Outliers detection applied to GPS localization

Zair, Salim 07 October 2016
In this work, we focus on the problem of detecting erroneous GPS measurements. In urban areas, acquisitions are highly degraded by multipath phenomena, the signals being reflected several times before reaching the receiver antenna; in forest areas, satellite occlusion reduces measurement redundancy. While the algorithms embedded in GPS receivers detect at most one erroneous measurement per epoch, the hypothesis of a single error at a time is no longer realistic when data from different navigation systems are combined. The detection and management of erroneous data (faulty or aberrant measurements, or outliers, depending on the terminology) has therefore become a major issue in autonomous navigation and robust localization, and raises a new technological challenge. The main contribution of this work is an outlier detection algorithm for GNSS localization based on a contrario modeling: two criteria based on the expected number of false alarms (NFA) are used to measure the consistency of a set of measurements under a noise-model assumption. Our second contribution is the introduction of Doppler measurements into the localization process. We extend outlier detection to both pseudo-range and Doppler measurements, and we propose a coupling with either the SIR particle filter or the Rao-Blackwellized particle filter, which allows the velocity to be estimated analytically. Our third contribution is an evidential approach to the detection of outliers in the pseudo-ranges. Inspired by RANSAC, we choose, among the possible combinations of observations, the most compatible one according to a measure of consistency or inconsistency; an evidential filtering step takes the previous solution into account. The proposed approaches achieve better performance than standard methods and demonstrate the benefit of removing outliers from the localization process.
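The Python fragment below is only a sketch of the subset-consistency idea shared by RANSAC and the a contrario selection described above (the NFA criteria themselves are not reproduced); the noise level, the 3-sigma gate and the function names are assumptions made for illustration.

# Sketch: score candidate subsets of pseudo-ranges by how well a least-squares position
# explains them; measurements outside the best subset are treated as outliers.
import itertools
import numpy as np

def solve_position(sat_pos, pr, x0=np.zeros(4), iters=5):
    """Iterative least squares for receiver position and clock bias (metres)."""
    x = x0.astype(float)
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        H = np.column_stack([(x[:3] - sat_pos) / rho[:, None], np.ones(len(pr))])
        x = x + np.linalg.lstsq(H, pr - (rho + x[3]), rcond=None)[0]
    return x

def best_consistent_subset(sat_pos, pr, sigma=5.0, min_size=5):
    """Return the largest subset whose residuals all stay within 3*sigma."""
    idx = range(len(pr))
    for size in range(len(pr), min_size - 1, -1):
        for subset in itertools.combinations(idx, size):
            s = list(subset)
            x = solve_position(sat_pos[s], pr[s])
            res = pr[s] - (np.linalg.norm(sat_pos[s] - x[:3], axis=1) + x[3])
            if np.all(np.abs(res) < 3 * sigma):
                return s, x
    return list(idx), solve_position(sat_pos, pr)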

Zavedení a aplikace obecného regresního modelu / The Introduction and Application of General Regression Model

Hrabec, Pavel January 2015
This thesis summarizes the general linear regression model in detail, including test statistics for coefficients, submodels and predictions, and in particular tests for outliers and high-leverage points. It describes how to include categorical variables in a regression model. The model was applied to describe the saturation of photographs of bread, where the input variables were the type of flour, the type of addition and the concentration of flour. After identification of outliers it was possible to create a mathematical model with a high coefficient of determination, which will be useful to experts in the food industry for preliminary identification of the possible composition of bread.
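Since the abstract turns on tests for outliers and high-leverage points, here is a brief, hedged Python sketch of those standard checks with statsmodels; the variable names and the synthetic bread data are invented for illustration, and the cut-offs of 2 and 2p/n are common rules of thumb rather than the thesis's exact criteria.

# Sketch: externally studentized residuals for outliers, hat-matrix diagonals for leverage.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({                      # hypothetical data, for illustration only
    "saturation": np.random.rand(30),
    "flour_type": np.random.choice(["A", "B"], 30),
    "addition":   np.random.choice(["none", "bran"], 30),
    "conc":       np.random.rand(30),
})
fit = smf.ols("saturation ~ C(flour_type) + C(addition) + conc", data=df).fit()
infl = fit.get_influence()

n, p = df.shape[0], fit.df_model + 1
outliers = np.where(np.abs(infl.resid_studentized_external) > 2)[0]
leverage = np.where(infl.hat_matrix_diag > 2 * p / n)[0]      # 2p/n rule of thumb
print("suspect outliers:", outliers, "high leverage:", leverage)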

The Detection of Outlying Fire Service’s Reports: FCA Driven Analytics

Krasuski, Adam; Wasilewski, Piotr 28 May 2013
We present a methodology for improving the detection of outlying Fire Service reports based on domain knowledge and dialogue with Fire & Rescue domain experts. An outlying report is one that is significantly different from the remaining data. Outliers are defined and searched for on the basis of domain knowledge and dialogue with experts. We face the problem of reducing high data dimensionality without losing the specificity and real complexity of the reported incidents. We solve this problem by introducing a knowledge-based generalization level that mediates between the analysed data and the experts' domain knowledge. In the methodology we use Formal Concept Analysis methods both for generating appropriate categories from the data and as tools supporting communication with domain experts. We conducted two experiments on finding two types of outliers, in which outlier detection was supported by domain experts.
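To make the FCA machinery slightly more concrete, here is a toy Python sketch of one way the derivation operator can flag an atypical report: a report is suspicious when the extent generated by its own attribute set contains almost no other reports. The context, the feature names and the support-of-one rule are invented for illustration and are not the paper's actual categories or criteria.

# Toy sketch: reports are objects, coded incident features are attributes.
context = {                      # hypothetical incident reports and their feature sets
    "r1": {"fire", "dwelling", "night"},
    "r2": {"fire", "dwelling"},
    "r3": {"fire", "dwelling", "night"},
    "r4": {"fire", "chemical", "evacuation"},   # intuitively the odd one out
}

def extent(attrs):
    """Objects possessing every attribute in `attrs` (the derivation operator on attribute sets)."""
    return {o for o, feats in context.items() if attrs <= feats}

def support_of_report(report):
    """Size of the extent generated by the report's own attribute set."""
    return len(extent(context[report]))

outliers = [r for r in context if support_of_report(r) == 1]
print(outliers)   # reports whose full feature combination occurs nowhere else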

Robustní odhady autokorelační funkce / Robust estimation of autocorrelation function

Lain, Michal January 2020
The autocorrelation function is a basic tool for time series analysis. The classical estimator is very sensitive to outliers and can lead to misleading results. This thesis deals with robust estimators of the autocorrelation function, which are more resistant to outliers than the classical estimator. The following approaches are presented: leaving the outliers out of the data, replacing the average with the median, transforming the data, estimating another coefficient, robust estimation of the partial autocorrelation function, and linear regression. The thesis describes the applicability of the presented methods, their advantages and disadvantages, and the necessary assumptions. All the approaches are compared in a simulation study and applied to real financial data.
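Two of the listed ideas are easy to write down; the short Python sketch below shows the classical lag-h autocorrelation with the sample mean replaced by the median, and a rank-based (Spearman) lag-h correlation that damps the influence of outliers. The formulas are my own reading of those approaches, not the thesis code.

# Sketch of two robust alternatives to the classical sample ACF.
import numpy as np
from scipy.stats import spearmanr

def acf_median(x, h):
    """Classical ACF formula at lag h with the median substituted for the mean."""
    x = np.asarray(x, dtype=float)
    d = x - np.median(x)
    return np.sum(d[:-h] * d[h:]) / np.sum(d * d)

def acf_rank(x, h):
    """Spearman correlation between the series and its lag-h shift."""
    x = np.asarray(x, dtype=float)
    return spearmanr(x[:-h], x[h:])[0]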

Testing Structure of Covariance Matrix under High-dimensional Regime

Wu, Jiawei January 2020
Statisticians are interested in testing the structure of covariance matrices, especially in the high-dimensional scenario in which the dimensionality of the data matrix exceeds the sample size. Many test statistics have been introduced to test whether the covariance matrix has identity structure (H01: Σ = I_p), sphericity structure (H02: Σ = σ²I_p) or diagonal structure (H03: Σ = diag(d_1, d_2, ..., d_p)). These test statistics work under the assumption that the data follow a multivariate normal distribution. In this thesis we compare the performance of the test statistics for each structure test under the given assumptions and when the distributional assumption is violated, and compare the tests' sensitivity to outliers. We run simulation studies and evaluate the structure test statistics by their significance level, power, and goodness-of-fit tests. In conclusion, we identify recommended test statistics that perform well under different scenarios. Moreover, we find that the test statistics for the identity structure test are more sensitive to changes in the distributional assumptions and to outliers than the others, while the test statistics for the diagonal structure test tolerate changes in the data matrices better.
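As one concrete example of the kind of statistic being compared, the Python sketch below computes John's sphericity statistic for H02: Σ = σ²I_p; the chi-square calibration is the classical fixed-p approximation and is stated here only as an assumption, since the thesis is precisely about how such tests behave when p is large relative to n and when normality fails.

# Hedged sketch of John's sphericity statistic U and its classical calibration.
import numpy as np
from scipy.stats import chi2

def john_sphericity_test(X):
    """X: n x p data matrix. Tests H0: Sigma = sigma^2 * I_p."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)                      # sample covariance
    R = S / (np.trace(S) / p)                        # scale out sigma^2
    U = np.trace((R - np.eye(p)) @ (R - np.eye(p))) / p
    stat = n * p * U / 2.0                           # classical fixed-p approximation
    df = p * (p + 1) // 2 - 1
    return U, 1.0 - chi2.cdf(stat, df)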

Forecasting Volume of Sales During the Abnormal Time Period of COVID-19. An Investigation on How to Forecast, Where the Classical ARIMA Family of Models Fail / Estimering av försäljningsprognoser under den abnorma tidsperioden av coronapandemin

Ghawi, Christina January 2021
During the COVID-19 pandemic, customer shopping habits have changed. Some industries experienced an abrupt shift at the pandemic outbreak while others navigate in new normal states. For some merchants, the highly uncertain phenomena of COVID-19 appear as outliers in time series of sales volume. Because forecasting models tend to replicate the past behavior of a series, outliers complicate forecasting: the abnormal events tend to be replicated, unreliably, in forecasts for the subsequent year(s). In this thesis we investigate how to forecast sales volume during the abnormal time period of COVID-19, where the classical ARIMA family of models produces unreliable forecasts. The research revolves around three time series exhibiting three types of outliers: a level shift, a transient change and an additive outlier. After identifying the time period of the abnormal behavior in each series, two experiments were carried out as attempts to increase the predictive accuracy for the three extreme cases: the first imputes the abnormal data in the series, and the second uses a combined model of a pre-pandemic and a post-abnormal forecast. The results point to a significant improvement of the mean absolute percentage error, at significance level α = 0.05, for the level shift when using the combined model compared to the pre-pandemic best-fit SARIMA model, and to a significant improvement for the additive outlier when using linear imputation. For the transient change, the results point to no significant improvement in the predictive accuracy of the experimental models compared to the pre-pandemic best-fit SARIMA model. Generalizing to large-scale conclusions about the superiority or feasibility of the methods for particular abnormal behaviors would require further empirical evaluation. The proposed experimental models were discussed in terms of reliability, validity and quality. Residual diagnostics suggested that the models are valid, although further improvements can be made, and the models fulfil the desired attributes of simplicity, scalability and flexibility. Owing to the uncertainty of the COVID-19 pandemic, the outputs should not be taken as long-term reliable solutions, but rather as temporary solutions that require more frequent re-forecasting.
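A rough Python sketch of the two experiments described above, with statsmodels' SARIMAX standing in for the best-fit SARIMA models; the (1,1,1)(1,1,1,12) orders, the equal blending weight and the helper names are assumptions for illustration, not the orders or weights selected in the thesis.

# Sketch: y is a pandas Series of monthly sales volume; `abnormal` is a boolean mask of the
# COVID window; `abnormal_start` is the positional index where the abnormal period begins.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def forecast_with_impute(y, abnormal, steps, order=(1, 1, 1), seasonal=(1, 1, 1, 12)):
    """Experiment 1: linearly impute the abnormal window, then fit a single SARIMA."""
    z = y.copy()
    z[abnormal] = np.nan
    z = z.interpolate()                                   # linear fill of the outlier window
    fit = SARIMAX(z, order=order, seasonal_order=seasonal).fit(disp=False)
    return fit.forecast(steps)

def combined_forecast(y, abnormal_start, steps, w=0.5, order=(1, 1, 1), seasonal=(1, 1, 1, 12)):
    """Experiment 2: blend a pre-pandemic fit with a fit on the full series."""
    pre = SARIMAX(y.iloc[:abnormal_start], order=order, seasonal_order=seasonal).fit(disp=False)
    post = SARIMAX(y, order=order, seasonal_order=seasonal).fit(disp=False)
    pre_fc = pre.forecast(len(y) - abnormal_start + steps).to_numpy()[-steps:]  # align horizons
    post_fc = post.forecast(steps).to_numpy()
    return w * pre_fc + (1 - w) * post_fc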

Multiple outlier detection and cluster analysis of multivariate normal data

Robson, Geoffrey 2003
Thesis (MScEng)--Stellenbosch University, 2003. / Abstract: Outliers may be defined as observations that are sufficiently aberrant to arouse the suspicion of the analyst as to their origin. They could be the result of human error, in which case they should be corrected, but they may also be an interesting exception that deserves further investigation. Identification of outliers typically consists of an informal inspection of a plot of the data, but this is unreliable for dimensions greater than two. A formal procedure for detecting outliers allows for consistency when classifying observations and makes it possible to automate the detection of outliers by computer. The special case of univariate data is treated separately to introduce essential concepts, and also because it may well be of interest in its own right. We then consider techniques used for detecting multiple outliers in a multivariate normal sample, and go on to explain how these may be generalized to include cluster analysis. Multivariate outlier detection is based on the Minimum Covariance Determinant (MCD) subset, and is therefore treated in detail. Exact bivariate algorithms were refined and implemented, and the solutions were used to establish the performance of the commonly used heuristic, Fast-MCD.
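For readers who want the practical counterpart of the MCD machinery discussed above, the short Python sketch below uses scikit-learn's MinCovDet (which implements the Fast-MCD heuristic) to flag multivariate outliers; the 97.5% chi-square cutoff is a conventional choice, not necessarily the rule used in the thesis.

# Sketch: robust location/scatter from the MCD subset, then robust Mahalanobis distances.
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def mcd_outliers(X, alpha=0.975):
    """Flag rows of X whose robust squared Mahalanobis distance exceeds the chi2_p(alpha) cutoff."""
    mcd = MinCovDet().fit(X)                 # Fast-MCD under the hood
    d2 = mcd.mahalanobis(X)                  # squared distances to the robust centre
    return d2 > chi2.ppf(alpha, df=X.shape[1])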
