  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Dynamical analysis of respiratory signals for diagnosis of sleep disordered breathing disorders.

Suren Rathnayake, Unknown Date
Sleep disordered breathing (SDB) is a highly prevalent but under-diagnosed disease. Among adults aged between 30 and 60 years, 24% of males and 9% of females show conditions of SDB, while 82% of men and 93% of women with moderate to severe SDB remain undiagnosed. Polysomnography (PSG) is the reference diagnostic test for SDB. During PSG, a number of physiological signals are recorded over an overnight sleep and then manually scored for sleep/wake stages and SDB events to obtain the reference diagnosis. Manual scoring of SDB events is an extremely time-consuming and cumbersome task with high inter- and intra-rater variation. PSG is labour intensive, expensive and inconvenient for patients. Further, PSG facilities are limited, leading to long waiting lists. There is an enormous clinical need for automation of PSG scoring and for an alternative automated ambulatory method suitable for population screening. In this thesis, we focus on (1) implementing a framework that enables more reliable scoring of SDB events while lowering manual scoring time, and (2) implementing a reliable automated screening procedure that can be used as a patient-friendly home-based study. Recordings of physiological measurements obtained during patients’ sleep often suffer from data losses, interference and artefacts. In a typical sleep scoring session, artefact-corrupted signal segments are visually detected and removed from further consideration. We developed a novel framework for automated artefact detection and signal restoration, based on the redundancy among respiratory flow signals. The signals considered are the airflow (thermistor sensor) and nasal pressure signals, which are clinically significant in detecting respiratory disturbances. We treat the respiratory system as a dynamical system, and use the celebrated Takens embedding theorem as the theoretical basis for signal prediction.
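The delay-embedding step underlying Takens-style signal prediction can be sketched as follows. This is only a minimal illustration of state-space reconstruction, not the thesis code; the test signal, embedding dimension and lag are made-up example values.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a 1-D signal into dim-dimensional delay vectors with lag tau.
    Takens' theorem guarantees that, for suitable dim and tau, such vectors
    reconstruct the dynamics of the underlying system."""
    n = len(x) - (dim - 1) * tau
    # Column j holds the signal shifted by j*tau samples.
    return np.column_stack([x[j * tau : j * tau + n] for j in range(dim)])

# Example: embed a sampled sine wave in 3 dimensions with lag 5.
signal = np.sin(np.linspace(0, 8 * np.pi, 200))
vectors = delay_embed(signal, dim=3, tau=5)
```

Each row of `vectors` is one point of the reconstructed trajectory; a predictor can then map nearby points to their successors.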
In this study, we categorise commonly occurring artefacts and distortions in the airflow and nasal pressure measurements into several groups and explore the efficacy of the proposed technique in detecting and recovering them. Results obtained from a database of clinical PSG signals indicate that the proposed technique can detect artefacts/distortions with a sensitivity >88% and specificity >92%. This work has the potential to simplify the work done by sleep scoring technicians, and also to improve automated sleep scoring methods. In the next phase of the thesis we investigated the diagnostic ability of single- and dual-channel respiratory flow measuring devices. Recent studies have shown that single-channel respiratory flow measurements can be used for automated diagnosis/screening for SDB diseases. Improvements in reliable home-based monitoring for SDB may be achieved with predictors based on recurrence quantification analysis (RQA). RQA essentially measures the complex structures present in a time series and is relatively independent of the nonlinearities present in respiratory measurements, such as those due to breathing nonlinearities and sensor movements. The nasal pressure, thermistor-based airflow, abdominal movement and thoracic movement measurements obtained during polysomnography were used in this study to implement an algorithm for automated screening for SDB diseases. The algorithm predicts SDB-affected measurement segments from twelve RQA-based features, body mass index (BMI) and neck circumference using mixture discriminant analysis (MDA). The rate of SDB-affected data segments per hour of recording (RDIS) is used as a measure for the diagnosis of SDB diseases. The operating points to be chosen were the prior probability of SDB-affected data segments (π1) and the RDIS threshold value above which a patient is predicted to have an SDB disease.
Five-fold cross-validation, stratified on the RDI values of the recordings, was used to estimate the operating points. Sensitivity and specificity rates for the final classifier were estimated using a two-layer assessment approach, with the operating points chosen at the inner layer using five-fold cross-validation and the choice assessed at the outer layer using repeated learning-testing. The nasal pressure measurement showed higher accuracy than the other respiratory measurements when used alone. The nasal pressure and thoracic movement measurements were identified as the best pair for a dual-channel device. The estimated sensitivity and specificity (standard error) in diagnosing SDB disease (RDI ≥ 15) are 90.3(3.1)% and 88.3(5.5)% when nasal pressure is used alone, and 89.5(3.7)% and 100.0(0.0)% when it is used together with thoracic movement. The present results suggest that RQA of a single respiratory measurement has the potential to be used in an automated SDB screening device, while a dual-channel device can be expected to be more reliable. Improvements may be possible by including other RQA-based features and by optimising the parameters.
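As a concrete illustration of the recurrence quantification analysis mentioned above, the sketch below computes the simplest RQA measure, the recurrence rate, for a toy series. The thesis features are richer and are computed on delay-embedded respiratory signals, so the raw-value recurrence matrix and the threshold here are simplifying assumptions.

```python
import numpy as np

def recurrence_rate(x, threshold):
    """Fraction of point pairs closer than `threshold` -- the simplest RQA
    measure, read directly off the recurrence matrix of a 1-D series."""
    d = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
    R = d < threshold                    # recurrence matrix (boolean)
    return R.mean()

# Toy series: two clusters of values, so recurrences occur within clusters.
x = np.array([0.0, 0.1, 0.9, 1.0, 0.05])
rr = recurrence_rate(x, threshold=0.2)
```

Further RQA measures (determinism, laminarity) are built from diagonal and vertical line structures in the same matrix `R`.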
64

Optimal Active Learning: experimental factors and membership query learning

Yu-hui Yeh, Unknown Date
The field of Machine Learning is concerned with the development of algorithms, models and techniques that solve challenging computational problems by learning from data representative of the problem (e.g. given a set of medical images previously classified by a human expert, build a model to predict whether unseen images are benign or malignant). Many important real-world problems have been formulated as supervised learning problems, where the assumption is that a data set is available containing the correct output (e.g. class label or target value) for each given data point. In many application domains, obtaining the correct outputs (labels) for data points is a costly and time-consuming task. This has motivated the development of Machine Learning techniques that attempt to minimize the number of labeled data points while maintaining good generalization performance on a given problem. Active Learning is one such class of techniques and is the focus of this thesis. Active Learning algorithms select or generate unlabeled data points to be labeled and use these points for learning. If successful, an Active Learning algorithm should be able to produce learning performance (e.g. test-set error) comparable to an equivalent supervised learner while using fewer labeled data points. Theoretical, algorithmic and experimental Active Learning research has been conducted and a number of successful applications have been demonstrated. However, the scope of many experimental studies on Active Learning has been relatively small, and there are very few large-scale experimental evaluations of Active Learning techniques. A significant amount of performance variability exists across Active Learning experimental results in the literature.
Furthermore, the implementation details and effects of experimental factors have not been closely examined in empirical Active Learning research, creating some doubt over the strength and generality of conclusions that can be drawn from such results. The Active Learning model/system used in this thesis is the Optimal Active Learning (OAL) algorithm framework with Gaussian Processes for regression problems (however, most of the research questions are of general interest in many other Active Learning scenarios). Experimental and implementation details of the Active Learning system are described in detail, using a number of regression problems and datasets of different types. It is shown that the experimental results of the system are subject to significant variability across problem datasets. The hypothesis that experimental factors can account for this variability is then investigated. The results show the impact of sampling and of the sizes of the datasets used when generating experimental results. Furthermore, preliminary experimental results expose performance variability across various real-world regression problems. The results suggest that these experimental factors can, to a large extent, account for the variability observed in experimental results. A novel resampling technique for Optimal Active Learning, called '3-Sets Cross-Validation', is proposed as a practical solution to reduce experimental performance variability, and further results confirm the usefulness of the technique. The thesis then proposes an extension of the Optimal Active Learning framework to learning via membership queries, with a novel algorithm named MQOAL. The MQOAL algorithm employs the Metropolis-Hastings Markov chain Monte Carlo (MCMC) method to sample data points for query selection.
Experimental results show that MQOAL provides performance comparable to the pool-based OAL learner, using a very generic, simple MCMC technique, and is robust to experimental factors related to the MCMC implementation. The possibility of making queries in batches is also explored experimentally, with results showing that while some performance degradation does occur, it is minimal for small batch sizes, which is likely to be valuable in some real-world problem domains.
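The generic MCMC machinery referred to above can be sketched as a plain random-walk Metropolis-Hastings sampler. The target density, step size and seed below are illustrative assumptions for a self-contained demo, not the MQOAL implementation.

```python
import math
import random

def metropolis_hastings(log_p, x0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings: draws samples whose long-run
    distribution follows exp(log_p), needing only unnormalized density
    evaluations -- the generic ingredient a query sampler can reuse."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)).
        if math.log(rng.random() + 1e-300) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

# Target: a standard normal, via its log-density up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
```

In a membership-query setting, `log_p` would instead score candidate input points by their expected usefulness, and accepted states become the queries.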
65

Machine learning in logistics: Increasing the performance of machine learning algorithms on two specific logistic problems

Lind Nilsson, Rasmus, January 2017
Data Ductus, a multinational IT consulting company, wants to develop an AI that monitors a logistics system and looks for errors. Once sufficiently trained, this AI will suggest a correction and automatically fix issues as they arise. This project presents how one works with machine learning problems and provides a deeper insight into how cross-validation and regularisation, among other techniques, are used to improve the performance of machine learning algorithms on the defined problem. Three techniques are tested and evaluated in our logistics system on three different machine learning algorithms, namely Naïve Bayes, Logistic Regression and Random Forest. The evaluation of the algorithms leads us to conclude that Random Forest, using cross-validated parameters, gives the best performance on our specific problems, with the other two falling behind in each tested category. It became clear to us that cross-validation is a simple yet powerful tool for increasing the performance of machine learning algorithms.
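The cross-validated parameter selection credited above follows a generic recipe: split the data into folds, train on all but one fold, score on the held-out fold, and average. The sketch below shows that recipe in plain Python; the fold splitter and the toy majority-class "model" are illustrative stand-ins, not the project's Random Forest setup.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k roughly equal folds after shuffling."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_score(fit, score, X, y, k=5):
    """Average held-out score over k folds: train on k-1 folds,
    evaluate on the remaining one."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for f in folds:
        test = set(f)
        Xtr = [x for i, x in enumerate(X) if i not in test]
        ytr = [v for i, v in enumerate(y) if i not in test]
        model = fit(Xtr, ytr)
        scores.append(sum(score(model, X[i], y[i]) for i in f) / len(f))
    return sum(scores) / k

# Toy "model": always predict the majority training label.
fit = lambda X, y: max(set(y), key=y.count)
score = lambda model, x, t: 1.0 if model == t else 0.0

X = list(range(10))
y = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
acc = cross_val_score(fit, score, X, y, k=5)  # held-out accuracy
```

Hyperparameter tuning repeats `cross_val_score` for each candidate setting and keeps the best-scoring one.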
66

Multilinear techniques in face recognition

Emanuel Dario Rodrigues Sena, 07 November 2014
Coordenação de Aperfeiçoamento de Nível Superior / In this dissertation, the face recognition problem is investigated from the standpoint of multilinear algebra, more specifically tensor decomposition, making use of Gabor wavelets. Feature extraction occurs in two stages: first, the Gabor wavelets are applied holistically in feature selection; second, facial images are modeled as a higher-order tensor according to the multimodal factors present. The higher-order singular value decomposition (HOSVD) is then applied to separate the multimodal factors of the images. The proposed face recognition approach exhibits a higher average success rate and greater stability when there is variation in the various multimodal factors, such as facial position, lighting condition and facial expression. We also propose a systematic way to perform cross-validation on tensor models to estimate the error rate in face recognition systems that explore the multimodal nature of the ensemble. Through random partitioning of the data organised as a tensor, mode-n cross-validation provides folds as subtensors extracted from the desired mode, giving a stratified method that supports repeated cross-validation with different partitionings.
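A minimal sketch of the HOSVD used above: one SVD per mode-n unfolding yields orthonormal factor matrices, and the core tensor is the data multiplied by each factor's transpose along its mode. The array shapes are toy values, and the unfolding convention is one common choice, not necessarily the dissertation's.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-`mode` fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """n-mode product: multiply matrix M along axis `mode` of tensor T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """HOSVD: factor matrices from per-mode SVDs, core via their transposes."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
          for n in range(T.ndim)]
    core = T
    for n, U in enumerate(Us):
        core = mode_dot(core, U.T, n)
    return core, Us

# Reconstruction check on a small random 3-way tensor.
T = np.random.default_rng(0).normal(size=(3, 4, 2))
core, Us = hosvd(T)
R = core
for n, U in enumerate(Us):
    R = mode_dot(R, U, n)  # multiply the factors back in
```

With full (square) factor matrices the decomposition is exact, so `R` reproduces `T`; truncating columns of each `U` gives the usual low-multilinear-rank approximation.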
67

A study of cross-validation for model selection in regression splines

謝式斌, Unknown Date
In this thesis, I consider the problem of estimating an unknown regression function using spline approximation. Splines are piecewise polynomials joined at knots. When using splines to approximate unknown functions, it is crucial to determine the number of knots and the knot locations: more knots approximate the underlying function more closely, but too many knots mean more parameters to estimate and a less accurate fit. In this thesis, I determine the knot locations using least squares for a given number of knots, and use cross-validation to find an appropriate number of knots. I consider three methods of splitting the data into training and testing sets, and compare the resulting knot choices and function estimates.
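The knot-count selection described above can be sketched with a piecewise-linear spline fitted by least squares and scored by k-fold cross-validation. The hinge basis, equally spaced knots and toy data below are simplifying assumptions, not the thesis's setup.

```python
import numpy as np

def linear_spline_design(x, knots):
    """Design matrix for a piecewise-linear spline: intercept, slope,
    and one hinge term max(0, x - k) per interior knot."""
    cols = [np.ones_like(x), x] + [np.maximum(0.0, x - k) for k in knots]
    return np.column_stack(cols)

def cv_score(x, y, n_knots, n_folds=5):
    """Mean squared held-out error of a linear spline with `n_knots`
    equally spaced interior knots, under n_folds-fold cross-validation."""
    knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
    folds = np.array_split(np.random.default_rng(0).permutation(len(x)), n_folds)
    errs = []
    for f in folds:
        tr = np.setdiff1d(np.arange(len(x)), f)
        beta, *_ = np.linalg.lstsq(linear_spline_design(x[tr], knots),
                                   y[tr], rcond=None)
        pred = linear_spline_design(x[f], knots) @ beta
        errs.append(np.mean((pred - y[f]) ** 2))
    return float(np.mean(errs))

# Pick the number of knots minimizing CV error on noisy |x|-shaped data.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 120))
y = np.abs(x) + rng.normal(0, 0.05, 120)
best = min(range(1, 8), key=lambda m: cv_score(x, y, m))
```

Since the true function has a single kink, a small knot count should score well; the same loop works unchanged for higher-order splines once the design matrix is swapped.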
68

The prediction of mutagenicity and pKa for pharmaceutically relevant compounds using 'quantum chemical topology' descriptors

Harding, Alexander, January 2011
Quantum Chemical Topology (QCT) descriptors, calculated from ab initio wave functions, have been utilised to model pKa and mutagenicity for data sets of pharmaceutically relevant compounds. The pKa of a compound is a pivotal property in both the life sciences and chemistry, since the propensity of a compound to donate or accept a proton is fundamental to understanding chemical and biological processes. The prediction of mutagenicity, specifically as determined by the Ames test, is important to help medicinal chemists select compounds that avoid this potential pitfall in drug design. Carbocyclic and heterocyclic aromatic amines were chosen because this compound class is synthetically very useful but also prone to positive outcomes in the battery of genotoxicity assays. The importance of pKa and genotoxic characteristics cannot be overestimated in drug design, where multivariate optimisation of the properties that influence Absorption-Distribution-Metabolism-Excretion-Toxicity (ADMET) profiles now features very early in the drug discovery process. Models were constructed using carboxylic acids in conjunction with the Quantum Topological Molecular Similarity (QTMS) method. The models produced Root Mean Square Error of Prediction (RMSEP) values of less than 0.5 pKa units and compared favourably with other pKa prediction methods. The ortho-substituted benzoic acids had the largest RMSEP, which was significantly improved by splitting the compounds into high-correlation subsets. For these subsets, single-term equations containing one ab initio bond length were able to accurately predict pKa. The pKa prediction equations were extended to phenols and anilines. Quantitative Structure-Activity Relationship (QSAR) models of acceptable quality were built from literature data to predict the mutagenic potency (LogMP) of carbo- and heterocyclic aromatic amines using QTMS. However, these models failed to predict Ames test values for compounds screened at GSK.
Contradictory internal and external data for several compounds motivated us to determine the fidelity of the Ames test for this compound class. The systematic investigation involved recrystallisation to purify compounds, analytical methods to measure purity, and finally comparative Ames testing. Unexpectedly, the Ames test results were very reproducible when 14 representative repurified molecules were tested as the free base and the hydrochloride salt in two different solvents (water and DMSO). This work formed the basis for the analysis of Ames data at GSK and for a systematic Ames testing programme for aromatic amines. So far, an unprecedentedly large list of 400 compounds has been made available to guide medicinal chemists. We constructed a model for the subset of 100 meta-/para-substituted anilines that could predict 70% of the Ames classifications. The experimental values of several of the model outliers appeared questionable on closer inspection, and three of these have been retested so far. The retests led to the reclassification of two of them and thereby to an improved model accuracy of 78%. This demonstrates the power of the iterative process of model building, critical analysis of experimental data, retesting outliers and rebuilding the model.
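The RMSEP reported for the single-term pKa equations can be illustrated with leave-one-out prediction error for a one-descriptor linear model. The bond-length values and coefficients below are synthetic stand-ins (the real models used ab initio bond lengths from QTMS), so only the procedure is the point.

```python
import numpy as np

def loo_rmsep(x, y):
    """Root mean square error of prediction under leave-one-out CV
    for a single-descriptor linear model y ~ a*x + b: each point is
    predicted by a model fitted to all the other points."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        a, b = np.polyfit(x[mask], y[mask], 1)
        errs.append((a * x[i] + b - y[i]) ** 2)
    return float(np.sqrt(np.mean(errs)))

# Synthetic illustration: pKa falling linearly with a bond-length
# descriptor plus measurement noise (hypothetical numbers).
rng = np.random.default_rng(0)
bond = rng.uniform(1.30, 1.40, 20)
pka = 4.2 - 30.0 * (bond - 1.35) + rng.normal(0, 0.1, 20)
rmsep = loo_rmsep(bond, pka)
```

Because each prediction comes from a model that never saw the predicted point, RMSEP estimates out-of-sample error rather than fit quality.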
69

Estimation of recovery functions of mineral reserves using copulas

Carmo, Frederico Augusto Rosa do, 24 August 2006
Advisor: Armando Zaupa Remacre. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Geociências. / The aim of this thesis was to develop a copula-based methodology for the problem of conditional reserve estimation. Copulas take a different approach from sequential Gaussian simulation, and in this thesis they are used to correct the tonnage and ore quantity of a mining project. A theoretical summary underpinning the study of copulas is presented, starting with important definitions and concepts from statistics and probability. After a discussion of correlation measures, the concept of copulas is introduced, from its definition and basic properties through the types of copulas essential to the application in this thesis. The theoretical foundation developed for the calculation of recoverable resources is discussed in full. The concepts of tonnage and grade curves are introduced, as they are the basis of the parametrisation of mineral reserves. It is shown how copulas can be used at the main points of mining geostatistics, particularly with regard to estimation errors. The concept of cross-validation is presented first, along with definitions of the illusory, optimal and ideal reserves. The ideal reserve is defined using copulas, and the results are compared with kriging and sequential Gaussian simulation. This comparison shows the consequences of over- and under-estimation in open-pit projects and mine sequencing.
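The copula mechanics can be sketched with the simplest case, a Gaussian copula: correlated standard normals pushed through the normal CDF give uniform marginals that retain the dependence structure. The Gaussian choice and the parameter values are illustrative assumptions; the thesis applies copulas to geostatistical estimates rather than synthetic pairs.

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) in [0,1]^2 from a Gaussian copula with
    correlation rho: correlated normals mapped through the standard
    normal CDF, which makes each marginal uniform."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    norm_cdf = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
    return norm_cdf(z)

uv = gaussian_copula_sample(rho=0.8, n=2000)
```

Applying the inverse CDFs of any target marginals (e.g. grade and tonnage distributions) to the columns of `uv` then yields dependent variables with exactly those marginals, which is what makes copulas useful for reserve estimation.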
70

Meta-learning

Hovorka, Martin, January 2008
The goal of this work is to become acquainted with and study meta-learning methods, implement an algorithm, and compare it with other machine learning methods.
